31 research outputs found

    Color image registration under illumination changes

    Get PDF
    The estimation of parametric global motion has received significant attention during the last two decades, but despite the great efforts invested, there are still open issues. One of the most important is the ability to recover large deformations between images in the presence of illumination changes while keeping the estimates accurate. Illumination changes in color images are another important open issue. In this paper, a generalized least squares-based motion estimator is used in combination with a color image model to allow accurate estimation of the global motion between two color images in the presence of large geometric transformations and illumination changes. Experiments using challenging images show that the presented technique is feasible and provides accurate estimates of the motion and illumination parameters.
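
    The abstract describes jointly estimating motion and illumination parameters with a generalized least squares formulation. The sketch below is only a minimal illustration of that general idea, assuming a simple translation-plus-gain/offset model on grayscale images and plain Gauss-Newton iterations; it is not the paper's estimator, and all function names and parameters are illustrative.

```python
# Minimal sketch (not the paper's method): jointly estimate a 2-D translation
# (dx, dy) and a gain/offset illumination change (a, b) between two images by
# linearized least squares. Assumes small displacements per iteration.
import numpy as np
from scipy.ndimage import shift as nd_shift


def estimate_motion_illumination(img1, img2, iters=20):
    """Fit img2(x+dx, y+dy) ~ a*img1(x, y) + b by Gauss-Newton."""
    img1 = np.asarray(img1, dtype=float)
    img2 = np.asarray(img2, dtype=float)
    dx = dy = 0.0
    a, b = 1.0, 0.0
    for _ in range(iters):
        # Warp img2 back toward img1 with the current translation estimate.
        warped = nd_shift(img2, (-dy, -dx), order=1, mode="nearest")
        gy, gx = np.gradient(warped)                  # image gradients
        r = (a * img1 + b - warped).ravel()           # photometric residual
        # Jacobian columns: d r / d [dx, dy, a, b]
        J = np.stack([-gx.ravel(), -gy.ravel(),
                      img1.ravel(), np.ones(img1.size)], axis=1)
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        dx, dy, a, b = dx + delta[0], dy + delta[1], a + delta[2], b + delta[3]
    return dx, dy, a, b
```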

    Thermal and hydrolytic degradation of electrospun fish gelatin membranes

    Get PDF
    The thermal and hydrolytic degradation of electrospun gelatin membranes cross-linked with glutaraldehyde in the vapor phase has been studied. In vitro degradation of the gelatin membranes was evaluated in phosphate-buffered saline solution at 37 °C. After 15 days under these conditions, a weight loss of 68% was observed, attributed to solvation and depolymerization of the main polymeric chains. Thermal degradation kinetics of the gelatin raw material and the as-spun electrospun membranes showed that the electrospinning processing conditions do not influence polymer degradation. However, for cross-linked samples a decrease in the activation energy was observed, associated with the effect of the glutaraldehyde cross-linking reaction on the inter- and intra-molecular hydrogen bonds of the protein. It is also shown that the electrospinning process does not affect the formation of the helical structure of gelatin chains. This work was supported by FEDER through the COMPETE Program and by the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Project PEST-C/FIS/UI607/2011 and by projects with references NANO/NMed-SD/0156/2007 and PTDC/CTM-NAN/112574/2009. The authors also acknowledge support from the COST Action MP1003, 2010 'European Scientific Network for Artificial Muscles'. DMC, JP and VS would like to acknowledge the FCT for the SFRH/BD/82411/2011, SFRH/BD/64901/2009 and SFRH/BPD/64958/2009 grants, respectively.

    Coloring local feature extraction

    Get PDF
    Although color is commonly experienced as an indispensable quality in describing the world around us, state-of-the-art local feature-based representations are mostly based on shape description and ignore color information. The description of color is hampered by the large amount of variation that causes measured color values to vary significantly. In this paper we aim to extend the description of local features with color information. To achieve wide applicability, the color descriptor should be robust to: (1) photometric changes commonly encountered in the real world, and (2) varying image quality, from high-quality images to snapshot photo quality and compressed internet images. Based on these requirements we derive a set of color descriptors. The proposed descriptors are compared by extensive testing on multiple application areas, namely matching, retrieval and classification, and on a wide variety of image qualities. The results show that color descriptors remain reliable under photometric and geometrical changes and with decreasing image quality. For all experiments, a combination of color and shape outperforms a pure shape-based approach.
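
    As one hypothetical illustration of a photometrically robust color description (not the paper's actual descriptor set), the sketch below builds a saturation-weighted hue histogram over a local patch; in the opponent representation the hue angle is unchanged by a common scaling or offset of the color channels.

```python
# Illustrative sketch only: a simple photometrically robust color description,
# a saturation-weighted hue histogram over an RGB patch. Bin count and
# weighting are assumptions.
import numpy as np


def hue_saturation_histogram(patch_rgb, bins=36):
    """Hue histogram over an RGB patch, weighted by saturation."""
    r, g, b = (patch_rgb[..., i].astype(float) for i in range(3))
    o1 = (r - g) / np.sqrt(2.0)             # opponent axis 1
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)   # opponent axis 2
    hue = np.arctan2(o1, o2)                # hue angle in [-pi, pi]
    sat = np.hypot(o1, o2)                  # saturation = chromatic magnitude
    # Weight by saturation so unstable gray pixels contribute less.
    hist, _ = np.histogram(hue, bins=bins, range=(-np.pi, np.pi), weights=sat)
    return hist / (hist.sum() + 1e-12)      # L1-normalize
```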

    Moment invariants for recognition under changing viewpoint and illumination

    No full text
    Generalised color moments combine shape and color information and put them on an equal footing. Rational expressions of such moments can be designed that are invariant under both geometric deformations and photometric changes. These generalised color moment invariants are effective features for recognition under changing viewpoint and illumination. The paper gives a systematic overview of such moment invariants for several combinations of deformations and photometric changes. Their validity and potential are corroborated through a series of experiments. Both indoor and outdoor images are considered, as illumination changes tend to differ between these circumstances. Although the generalised color moment invariants are extracted from planar surface patches, it is argued that invariant neighbourhoods offer a concept through which they can also be used to deal with 3D objects and scenes.
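
    The building block behind these features is the generalised color moment, a moment of the image coordinates combined with powers of the color bands. A minimal sketch of that building block is below; the rational invariant combinations the paper classifies are not reproduced, and the coordinate normalization is an assumption.

```python
# Sketch of a generalised color moment of order (p, q) and degree (a, b, c)
# over an image patch:
#   M_pq^abc = sum_x sum_y  x^p * y^q * R(x,y)^a * G(x,y)^b * B(x,y)^c
import numpy as np


def generalized_color_moment(patch_rgb, p, q, a, b, c):
    rgb = patch_rgb.astype(float)
    h, w, _ = rgb.shape
    # Normalize coordinates to [0, 1] so moments are comparable across patch sizes.
    y, x = np.mgrid[0:h, 0:w]
    x = x / max(w - 1, 1)
    y = y / max(h - 1, 1)
    r_ch, g_ch, b_ch = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.sum(x**p * y**q * r_ch**a * g_ch**b * b_ch**c)
```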

    Recognizing Color Patterns Irrespective of Viewpoint and Illumination

    No full text
    New invariant features are presented that can be used for the recognition of planar color patterns such as labels, logos, signs, pictograms, etc., irrespective of the viewpoint or the illumination conditions, and without the need for error-prone contour extraction. The new features are based on moments of powers of the intensities in the individual color bands and combinations thereof. These moments implicitly characterize the shape, the intensity and the color distribution of the pattern in a uniform manner. The paper gives a classification of all functions of such moments which are invariant under both affine deformations of the pattern (thus achieving viewpoint invariance) and linear changes of the intensity values of the color bands (hence coping with changes in the irradiance pattern due to different lighting conditions and/or viewpoints). The discriminant power and classification performance of the new invariants for color pattern recognition are tested on a data set of im..

    Adaptive Line Matching for Low-Textured Images

    No full text

    A printer indexing system for color calibration with applications in dietary assessment

    No full text
    In image-based dietary assessment, color is a very important feature for food identification. One issue with using color in image analysis is the calibration of the color image capture system. In this paper we propose an indexing system for color camera calibration using printed color checkerboards, also known as fiducial markers (FMs). To use an FM for color calibration, one must know which printer was used to print it, so that the correct color calibration matrix can be applied. We have designed a printer indexing scheme that allows one to determine which printer was used to print the FM based on a unique arrangement of color squares and binarized marks (used for error control) printed on the FM. Using normalized cross-correlation and pattern detection, the index corresponding to the printer for a particular FM can be determined. Our experimental results show this scheme is robust against most types of lighting conditions.
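
    The abstract names normalized cross-correlation as the matching step. Below is a minimal, self-contained sketch of that step only; the actual mark layout, thresholds and index decoding used in the paper are not reproduced, and the helper names are illustrative.

```python
# Hedged sketch: locate a printed mark with normalized cross-correlation (NCC).
import numpy as np


def normalized_cross_correlation(image, template):
    """Dense NCC of a grayscale template over a grayscale image (valid mode)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum()) + 1e-12
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + th, j:j + tw]
            w = win - win.mean()
            out[i, j] = (w * t).sum() / (np.sqrt((w ** 2).sum()) * t_norm + 1e-12)
    return out


def detect_marks(image, template, threshold=0.8):
    """Return (row, col) locations where the mark template correlates strongly."""
    ncc = normalized_cross_correlation(image, template)
    return np.argwhere(ncc > threshold)
```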

    3D Wide Baseline Correspondences using Depth-maps

    No full text
    Point matching between two or more images of a scene shot from different viewpoints is the crucial step in defining the epipolar geometry between views, recovering the camera’s egomotion, or building a 3D model of the framed scene. Unfortunately, in most common cases robust correspondences between points in different images can be established only when small variations in viewpoint position, focal length or lighting are present between the images; in all other conditions, only ad-hoc assumptions about the 3D scene or weak correspondences can be used. In this paper, we present a novel matching method where depth-maps, nowadays available from cheap, off-the-shelf devices, are integrated with 2D images to provide robust descriptors even when a wide baseline or strong lighting variations are present.
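
    Purely as an illustration of the general idea, and not the authors' descriptor, the sketch below augments a simple 2D gradient-orientation histogram of a keypoint patch with the same statistic computed on the co-registered depth patch, so that matching can lean on geometry when appearance degrades under wide baselines or lighting changes; the histogram choices and the weight are assumptions.

```python
# Hypothetical RGB-D descriptor sketch: concatenate an appearance histogram
# with a depth-shape histogram computed over the same keypoint patch.
import numpy as np


def gradient_orientation_hist(patch, bins=8):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-12)


def rgbd_descriptor(gray_patch, depth_patch, w_depth=0.5):
    """Concatenate appearance and depth-shape histograms into one descriptor."""
    d_app = gradient_orientation_hist(gray_patch)
    d_geo = gradient_orientation_hist(depth_patch)  # depth gradients reflect surface slope
    return np.concatenate([d_app, w_depth * d_geo])
```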

    SURF: Speeded Up Robust Features

    No full text
    In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF’s strong performance.
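
    The speed-up the abstract attributes to integral images can be shown in a few lines: after a single pass to build the integral image, the sum over any axis-aligned rectangle (and hence any box filter used to approximate Hessian responses) costs four lookups. The sketch below covers that trick only, not the full SURF detector.

```python
# Integral-image sketch: constant-time box sums after one cumulative-sum pass.
import numpy as np


def integral_image(img):
    """ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)


def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] using four integral-image lookups."""
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```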